Sparse Iterative Learning Control with Application to a Wafer Stage: Achieving Performance, Resource Efficiency, and Task Flexibility
Trial-varying disturbances are a key concern in Iterative Learning Control
(ILC) and may lead to inefficient and expensive implementations and severe
performance deterioration. The aim of this paper is to develop a general
framework for optimization-based ILC that allows for enforcing additional
structure, including sparsity. The proposed method enforces sparsity in a
generalized setting through convex relaxations using l_1 norms. The
proposed ILC framework is applied to the optimization of sampling sequences for
resource-efficient implementation, trial-varying disturbance attenuation, and
basis function selection. The framework has significant potential in control
applications such as mechatronics, as is confirmed through an application to a
wafer stage.
Comment: 12 pages, 14 figures
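
To make the mechanism concrete, the sketch below (Python with cvxpy; the
lifted plant J, the reference, and the weight lam are toy assumptions, not the
authors' setup) shows a single optimization-based ILC trial update in which an
l_1 penalty serves as the convex relaxation enforcing sparsity:

```python
# Minimal sketch (not the authors' implementation): one trial update of
# optimization-based ILC with an l_1 penalty that promotes sparse commands.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

N = 50                                   # samples per trial (assumed)
J = np.tril(0.1 * rng.standard_normal((N, N)) + np.eye(N))  # toy lifted plant
r = np.sin(np.linspace(0, 2 * np.pi, N))                    # toy reference

u_prev = np.zeros(N)                     # command applied on the previous trial
e_prev = r - J @ u_prev                  # error measured on the previous trial

lam = 0.1                                # sparsity weight (tuning parameter)
u = cp.Variable(N)

# Predicted next-trial error under the lifted-system model, plus an l_1
# term that acts as a convex relaxation promoting a sparse command signal.
e_next = e_prev - J @ (u - u_prev)
cp.Problem(cp.Minimize(cp.sum_squares(e_next) + lam * cp.norm1(u))).solve()

print("nonzero command samples:", int(np.sum(np.abs(u.value) > 1e-6)), "of", N)
```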
Reduced Complexity Filtering with Stochastic Dominance Bounds: A Convex Optimization Approach
This paper uses stochastic dominance principles to construct upper and lower
sample path bounds for Hidden Markov Model (HMM) filters. Given an HMM, by using
convex optimization methods for nuclear norm minimization with copositive
constraints, we construct low-rank stochastic matrices so that the optimal
filters using these matrices provably lower and upper bound (with respect to a
partially ordered set) the true filtered distribution at each time instant.
Since these matrices are low rank (say, rank R), the computational cost of
evaluating the filtering bounds is O(XR) instead of O(X^2), where X is the
state-space dimension. A Monte-Carlo importance
sampling filter is presented that exploits these upper and lower bounds to
estimate the optimal posterior. Finally, using the Dobrushin coefficient,
explicit bounds are given on the variational norm between the true posterior
and the upper and lower bounds.
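
A minimal sketch of why the low-rank structure reduces the cost: with a rank-R
factorization P = UV of the transition matrix, the filter's prediction step can
be evaluated as (pi U)V in O(XR) operations. The factors, observation model,
and data below are toy assumptions, not the paper's nuclear-norm construction:

```python
# Minimal sketch (toy model, not the paper's construction): HMM filtering
# with a rank-R transition matrix P = U @ V costs O(XR) per step because
# pi @ (U @ V) is evaluated as (pi @ U) @ V.
import numpy as np

rng = np.random.default_rng(1)
X, R = 200, 5                             # state dimension and rank (assumed)

U = rng.dirichlet(np.ones(R), size=X)     # X x R, rows sum to 1
V = rng.dirichlet(np.ones(X), size=R)     # R x X, rows sum to 1
# P = U @ V is row-stochastic with rank at most R.

B = rng.dirichlet(np.ones(X), size=3)     # toy observation likelihoods (3 symbols)

pi = np.full(X, 1.0 / X)                  # filtered distribution
for y in [0, 2, 1, 1, 0]:                 # toy observation sequence
    pred = (pi @ U) @ V                   # O(XR) prediction step
    post = pred * B[y]                    # measurement update
    pi = post / post.sum()                # normalize

print("filtered distribution sums to", pi.sum())
```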
Minority Challenge of Majority Actions in a Close Corporation in Italy and the United States
This paper addresses the problem of segmenting a time series with respect to
changes in the mean value or in the variance. The first case is when the data
are modeled as a sequence of independent, normally distributed random
variables with unknown, possibly changing, mean value but fixed variance. The
main assumption is that the mean value is piecewise constant in time, and the
task is to estimate the change times and the mean values within the segments.
The second case is when the mean value is constant, but the variance can
change. The assumption is that the variance is piecewise constant in time, and
we want to estimate the change times and the variance values within the
segments. To solve these problems, we study an l_1-regularized maximum
likelihood method, related to the fused lasso method and l_1 trend filtering,
where the parameters to be estimated are free to vary at each sample. To
penalize variations in the estimated parameters, the l_1-norm of the time
differences of the parameters is used as a regularization term. This idea is
closely related to total variation denoising. The main contribution is that a
convex formulation of the variance estimation problem, where the
parametrization is based on the inverse of the variance, can be formulated as
a certain mean estimation problem. This implies that results and methods for
mean estimation can be applied to the challenging problem of variance
segmentation/estimation.
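
For the mean-segmentation case, a minimal sketch (Python with cvxpy; the
signal and the weight lam are toy assumptions): penalizing the l_1 norm of the
first differences yields a piecewise-constant mean estimate whose jumps mark
the estimated change points.

```python
# Minimal sketch of l_1 mean filtering (fused lasso / TV denoising):
# minimize 0.5*||y - m||_2^2 + lam*||D m||_1, where D takes first
# differences, so jumps in the estimated mean m are sparse.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
y = np.concatenate([np.zeros(40), 2 * np.ones(40), 0.5 * np.ones(40)])
y += 0.3 * rng.standard_normal(y.size)   # piecewise-constant mean + noise

m = cp.Variable(y.size)
lam = 2.0                                # regularization weight (tuning)
cost = 0.5 * cp.sum_squares(y - m) + lam * cp.norm1(cp.diff(m))
cp.Problem(cp.Minimize(cost)).solve()

jumps = np.flatnonzero(np.abs(np.diff(m.value)) > 1e-3)
print("estimated change points near samples:", jumps)
```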
A Class of Nonconvex Penalties Preserving Overall Convexity in Optimization-Based Mean Filtering
l_1 mean filtering is a conventional, optimization-based method to
estimate the positions of jumps in a piecewise constant signal perturbed by
additive noise. In this method, the l_1 norm is used to promote sparsity of the
first-order derivative of the signal. Theoretical results, however, show that
in some situations, which can occur frequently in practice, even when the jump
amplitudes tend to infinity, the conventional method identifies false change
points. This issue is referred to as the stair-casing problem and restricts
the practical importance of l_1 mean filtering. In this paper, sparsity is
penalized more tightly than by the l_1 norm by exploiting a certain class of
nonconvex functions, while the strict convexity of the resulting optimization
problem is preserved. This results in higher performance in detecting change
points. To theoretically justify the performance improvements over
l_1 mean filtering, deterministic and stochastic sufficient conditions for exact
change point recovery are derived. In particular, theoretical results show that
in the stair-casing problem, our approach might be able to exclude the false
change points, while l_1 mean filtering may fail. A number of numerical
simulations illustrate the superiority of our method over l_1 mean
filtering and another state-of-the-art algorithm that promotes sparsity more
tightly than the l_1 norm. Specifically, it is shown that our approach can
consistently detect change points when the jump amplitudes become sufficiently
large, while the two other competitors cannot.
Comment: Submitted to IEEE Transactions on Signal Processing
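
The convexity-preservation mechanism can be illustrated with a generic
minimax-concave-style penalty (an assumption for illustration; the paper
defines its own class): if the penalty's negative curvature is bounded by
gamma and the quadratic data term has curvature 1, the composite objective
remains convex whenever gamma < 1, even though the penalty alone is nonconvex.

```python
# Minimal sketch of the convexity-preservation idea (penalty form assumed,
# not the paper's exact class): a concave-saturating penalty phi added to a
# strongly convex quadratic data term leaves the total objective convex
# as long as phi's curvature never drops below -1.
import numpy as np

def phi(t, lam=1.0, gamma=0.5):
    """Minimax-concave-style penalty: l_1-like near 0, then saturates.
    Its second derivative is bounded below by -gamma."""
    t = np.abs(t)
    quad = lam * t - 0.5 * gamma * t**2
    return np.where(t <= lam / gamma, quad, 0.5 * lam**2 / gamma)

# 1-D composite objective: data-fit curvature 1 plus phi (curvature >= -0.5).
y = 0.7
f = lambda t: 0.5 * (t - y) ** 2 + phi(t)

# Numerical convexity check: second differences of f on a grid stay
# nonnegative because gamma = 0.5 < 1.
t = np.linspace(-4, 4, 4001)
d2 = np.diff(f(t), 2)
print("min second difference:", d2.min())   # >= 0 up to grid error
```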